Trust, Governance, and AI Decision Making
IBM's Global Leader on Responsible AI and AI Governance, Francesca Rossi, arrived at her current area of focus after a 2014 sabbatical at the Harvard Radcliffe Institute, which inspired her to think beyond her training as an academic researcher and incorporate both humanistic and technological perspectives into the development of AI systems. In the intervening years, she helped build IBM's internal AI Ethics Board and foster external partnerships to shape best practices for responsible AI. Here, we talk about trust, governance, and what these issues have to do with AI decision making. The ethical issues around the use of AI have evolved with the technology's capabilities: traditional machine learning approaches introduced issues such as fairness, explainability, privacy, and transparency.
Large language models require a new form of oversight: capability-based monitoring
Kellogg, Katherine C., Ye, Bingyang, Hu, Yifan, Savova, Guergana K., Wallace, Byron, Bitterman, Danielle S.
The rapid adoption of large language models (LLMs) in healthcare has been accompanied by scrutiny of their oversight. Existing monitoring approaches, inherited from traditional machine learning (ML), are task-based and founded on assumed performance degradation arising from dataset drift. In contrast, with LLMs, inevitable model degradation due to changes in populations compared to the training dataset cannot be assumed, because LLMs were not trained for any specific task in any given population. We therefore propose a new organizing principle guiding generalist LLM monitoring that is scalable and grounded in how these models are developed and used in practice: capability-based monitoring. Capability-based monitoring is motivated by the fact that LLMs are generalist systems whose overlapping internal capabilities are reused across numerous downstream tasks. Instead of evaluating each downstream task independently, this approach organizes monitoring around shared model capabilities, such as summarization, reasoning, translation, or safety guardrails, in order to enable cross-task detection of systemic weaknesses, long-tail errors, and emergent behaviors that task-based monitoring may miss. We describe considerations for developers, organizational leaders, and professional societies for implementing a capability-based monitoring approach. Ultimately, capability-based monitoring will provide a scalable foundation for safe, adaptive, and collaborative monitoring of LLMs and future generalist artificial intelligence models in healthcare.
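The organizing principle in the abstract above can be sketched in code: instead of tracking each downstream task independently, error observations are rolled up into the shared capabilities those tasks exercise, so a systemic weakness surfaces across tasks. The task names, capability taxonomy, and error rates below are illustrative assumptions, not taken from the paper.

```python
from collections import defaultdict

# Hypothetical mapping from downstream clinical tasks to the shared model
# capabilities they exercise (names are illustrative, not from the paper).
TASK_CAPABILITIES = {
    "discharge_summary": ["summarization", "safety_guardrails"],
    "note_translation": ["translation"],
    "triage_support": ["reasoning", "safety_guardrails"],
}

def monitor_by_capability(task_errors):
    """Aggregate per-task error observations into per-capability signals.

    A weakness shared across tasks (e.g. in safety guardrails) can surface
    here even when no single task's error rate looks alarming on its own.
    """
    capability_errors = defaultdict(list)
    for task, errors in task_errors.items():
        for capability in TASK_CAPABILITIES[task]:
            capability_errors[capability].extend(errors)
    # Mean observed error rate per capability, pooled across tasks
    return {cap: sum(errs) / len(errs) for cap, errs in capability_errors.items()}

# Example: guardrail failures observed in two otherwise unrelated tasks
rates = monitor_by_capability({
    "discharge_summary": [0.02, 0.03],
    "triage_support": [0.04, 0.05],
    "note_translation": [0.01],
})
```

In this sketch, the pooled "safety_guardrails" rate draws on observations from both the summary and triage tasks, which is the cross-task detection the abstract describes; a production system would replace the toy error lists with real evaluation signals.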
- North America > United States > Massachusetts > Suffolk County > Boston (0.05)
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.04)
- North America > United States > Massachusetts > Middlesex County > Waltham (0.04)
- Health & Medicine > Therapeutic Area > Oncology (1.00)
- Health & Medicine > Health Care Providers & Services (0.71)
An Adaptive Responsible AI Governance Framework for Decentralized Organizations
Meimandi, Kiana Jafari, Reuel, Anka, Aranguiz-Dias, Gabriela, Rahama, Hatim, Ayadi, Ala-Eddine, Boullier, Xavier, Verdo, Jérémy, Montanie, Louis, Kochenderfer, Mykel
This paper examines the assessment challenges of Responsible AI (RAI) governance efforts in globally decentralized organizations through a case study collaboration between a leading research university and a multinational enterprise. While there are many proposed frameworks for RAI, their application in complex organizational settings with distributed decision-making authority remains underexplored. Our RAI assessment, conducted across multiple business units and AI use cases, reveals four key patterns that shape RAI implementation: (1) complex interplay between group-level guidance and local interpretation, (2) challenges translating abstract principles into operational practices, (3) regional and functional variation in implementation approaches, and (4) inconsistent accountability in risk oversight. Based on these findings, we propose an Adaptive RAI Governance (ARGO) Framework that balances central coordination with local autonomy through three interdependent layers: shared foundation standards, central advisory resources, and contextual local implementation. We contribute insights from academic-industry collaboration for RAI assessments, highlighting the importance of modular governance approaches that accommodate organizational complexity while maintaining alignment with responsible AI principles. These lessons offer practical guidance for organizations navigating the transition from RAI principles to operational practice within decentralized structures.
- South America > Argentina > Patagonia > Río Negro Province > Viedma (0.04)
- North America > United States > California > Santa Clara County > Palo Alto (0.04)
- Europe > Italy (0.04)
- Asia > China (0.04)
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Government (1.00)
RQM+ Acquires Giotto Compliance
RQM+, the world's leading MedTech service provider, announced the acquisition of Giotto Compliance from Giotto.ai. Giotto Compliance is a global, all-in-one artificial intelligence (AI) platform designed to increase the efficiency and effectiveness of regulatory reports and filings across the product development lifecycle for medical device and in vitro diagnostics manufacturers. "We have only begun to scratch the surface of what can be achieved by integrating Giotto Compliance AI technology throughout our consulting, clinical trials, laboratory and reimbursement services," said RQM+ CEO Margaret Keegan. "We are fortunate to have our Chief Digital and Technology Officer Alaric Jackson lead this new business unit. He will expand the capabilities of Giotto Compliance to enhance our current services as well as offer a standalone solution that can be leveraged to streamline regulatory document development while maintaining compliance with global regulations." Giotto Compliance will operate as an independent RQM+ business unit and serve its growing customer base. The business will be led by Jackson, who has more than 20 years of technology leadership experience within the contract research organization (CRO) and pharmaceutical industries. In previous roles, he automated and streamlined regulatory solutions to enhance compliance, productivity and quality using advanced technologies and digital workflows. "We are excited to offer the MedTech industry a step change using cutting-edge AI that improves quality while increasing speed-to-market," said Jackson. "Currently, Giotto Compliance reduces the burden of data collection, analysis and creation of regulatory documentation, such as clinical evaluation reports, allowing our team to focus on creative solutions, complex problem solving and impactful work."
My life as ML Developer #1
As a Machine Learning Developer (ML Developer), I am part of a Digital Hub (also called an Innovation Lab in other companies). But what makes working in a digital hub so special for me? Our HDH (Heraeus Digital Hub) is the main point of contact for all Business Units when it comes to the topics of AI, Data Science, Machine Learning, IIoT, Robotics and the like. From this list of topics, you can already guess that we are a very diverse team of experts. In my opinion, this makes working at the HDH particularly interesting.
IT & Strategy Talent Programme - Junior Data Engineer at Vattenfall - Solna, Sweden
Vattenfall is one of Europe's largest producers and retailers of electricity and heat. Our main markets are Sweden, Germany, the Netherlands, Denmark, and the UK. The Vattenfall Group has approximately 20,000 employees. We have been electrifying industries, powering homes and transforming life through innovation for more than 100 years. We now want to make fossil free living possible within one generation and we are driving the transition to a sustainable energy system.
Artificial intelligence strategists are drowning in data
While it may take many by surprise, that's the fresh call to action among analysts paying close attention to how companies are – or aren't – factoring artificial intelligence (AI) and machine learning (ML) into their data management plans and playbooks. After years of reading sensational stories about the limitless potential of intelligent machines, stakeholders, and C-suite executives in particular, appear to be confused about the best course of action to take. Commercial missteps and the total failure of some products have resulted. Experts say it doesn't have to be this way. "AI and ML has become crucial and necessary for nearly all businesses in every sector," says Elliott Young, CTO, Dell Technologies UK. "In the same way that businesses have had to transform digitally and become digital-first, companies are going to need AI and ML to remain competitive. Those on the path towards this are already reaping the benefits of being able to make decisions driven by predictive analytics."
Data ethics: What it means and what it takes
Now more than ever, every company is a data company. By 2025, individuals and companies around the world will produce an estimated 463 exabytes of data each day (Jeff Desjardins, "How much data is generated each day?", World Economic Forum, April 17, 2019). With that in mind, most businesses have begun to address the operational aspects of data management, for instance, determining how to build and maintain a data lake or how to integrate data scientists and other technology experts into existing teams. Fewer companies have systematically considered and started to address the ethical aspects of data management, which could have broad ramifications and responsibilities. If algorithms are trained with biased data sets or data sets are breached, sold without consent, or otherwise mishandled, for instance, companies can incur significant reputational and financial costs. Board members could even be held personally liable.
- Law (1.00)
- Information Technology > Security & Privacy (1.00)
- Government (1.00)
- Banking & Finance (1.00)
- Information Technology > Artificial Intelligence (1.00)
- Information Technology > Information Management (0.90)
- Information Technology > Data Science > Data Mining > Big Data (0.34)
Meet the A3 Artificial Intelligence Tech Strategy Board Members
In the first of our series of A3 interviews with AI leaders, John Lizzi, the Executive Leader - Robotics and Autonomous Systems at GE, discusses how to develop AI projects that focus on business objectives. Lizzi, who serves as the chair of the Association for Advancing Automation's Artificial Intelligence Technology Strategy Board, says that AI is enabling intelligent systems to operate in a complex and uncertain world. Check out his advice on how to craft your AI strategy. How would you advise companies to choose their artificial intelligence projects – and what questions do they need to answer before they begin? Win hearts and minds: I think it's important to note that injecting new and disruptive technology into a business is hard no matter what technology you're talking about.
Integrated enterprises need to optimise the use of AI for better CX
To get the most out of AI and ML to better meet changing customer needs, enterprises need to start integrating their business units, operations and datasets into a more consolidated entity. This is according to Archana Arakkal, Machine Learning Engineer at Synthesis, who was speaking ahead of a webinar on the Customer Service of the Future, to be hosted by Synthesis, AWS and Salesforce next month. Every engagement between the customer and the brand is part of the overall customer experience, and customers expect a great deal more of this experience than they did 10 years ago, says Arakkal. "For example, traditional marketing and advertising has had to evolve beyond the old 'spray and pray' approach, since customers now expect hyper-personalisation," she says. "Marketing is getting smarter about targeting customers based on their personal needs, their digital footprint, what platforms they use at what time of day. Customers aren't blind – they know brands have customer data on what they have bought before, their location and interests, so there is an inherent expectation that when they are targeted with a product, it will be a product that makes sense to them." While marketing and advertising are becoming better at personalising offers, there often remains a gap in their understanding of the customer – in the silos of data within the various brand business units, Arakkal says.
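The data-silo problem Arakkal describes can be illustrated with a minimal sketch: separate business-unit datasets keyed on a shared customer identifier are joined into a single customer view that personalisation models can then consume. The business-unit names, column names, and sample records below are illustrative assumptions, not from the article.

```python
import pandas as pd

# Hypothetical silos: the same customers appear in separate business-unit
# datasets, linked only by a shared customer_id (all values illustrative).
marketing = pd.DataFrame({
    "customer_id": [1, 2],
    "last_campaign_click": ["2024-05-01", "2024-05-03"],
})
sales = pd.DataFrame({
    "customer_id": [1, 2],
    "last_purchase": ["running shoes", "yoga mat"],
})
support = pd.DataFrame({
    "customer_id": [1],
    "open_tickets": [2],
})

# Consolidate the silos into one customer view; outer joins keep
# customers who appear in only some business units (missing fields
# become NaN rather than dropping the customer).
customer_view = (
    marketing
    .merge(sales, on="customer_id", how="outer")
    .merge(support, on="customer_id", how="outer")
)
```

The outer join is the relevant design choice here: an inner join would silently discard any customer unknown to one business unit, which is exactly the siloed blind spot the consolidated view is meant to remove.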